Algorithm aversion


A funny companion: Distinct neural responses to perceived AI- versus human-generated humor

Rao, Xiaohui, Wu, Hanlin, Cai, Zhenguang G.

arXiv.org Artificial Intelligence

As AI companions become capable of human-like communication, including telling jokes, understanding how people cognitively and emotionally respond to AI humor becomes increasingly important. This study used electroencephalography (EEG) to compare how people process humor from AI versus human sources. Behavioral analysis revealed that participants rated AI and human humor as comparably funny. However, neurophysiological data showed that AI humor elicited a smaller N400 effect, suggesting reduced cognitive effort during the processing of incongruity. This was accompanied by a larger Late Positive Potential (LPP), indicating a greater degree of surprise and emotional response. This enhanced LPP likely stems from the violation of low initial expectations regarding AI's comedic capabilities. Furthermore, a key temporal dynamic emerged: human humor showed habituation effects, marked by an increasing N400 and a decreasing LPP over time. In contrast, AI humor demonstrated increasing processing efficiency and emotional reward, with a decreasing N400 and an increasing LPP. This trajectory reveals how the brain can dynamically update its predictive model of AI capabilities. This process of cumulative reinforcement challenges "algorithm aversion" in humor, as it demonstrates how cognitive adaptation to AI's language patterns can lead to an intensified emotional reward. Additionally, participants' social attitudes toward AI modulated these neural responses, with higher perceived AI trustworthiness correlating with enhanced emotional engagement. These findings indicate that the brain responds to AI humor with surprisingly positive and intense reactions, highlighting humor's potential for fostering genuine engagement in human-AI social interaction.


Overcoming Algorithm Aversion with Transparency: Can Transparent Predictions Change User Behavior?

Bohlen, Lasse, Kruschel, Sven, Rosenberger, Julian, Zschech, Patrick, Kraus, Mathias

arXiv.org Artificial Intelligence

Previous work has shown that allowing users to adjust a machine learning (ML) model's predictions can reduce aversion to imperfect algorithmic decisions. However, these results were obtained in situations where users had no information about the model's reasoning. Thus, it remains unclear whether interpretable ML models could further reduce algorithm aversion or even render adjustability obsolete. In this paper, we conceptually replicate a well-known study that examines the effect of adjustable predictions on algorithm aversion and extend it by introducing an interpretable ML model that visually reveals its decision logic. Through a pre-registered user study with 280 participants, we investigate how transparency interacts with adjustability in reducing aversion to algorithmic decision-making. Our results replicate the adjustability effect, showing that allowing users to modify algorithmic predictions mitigates aversion. Transparency's impact appears smaller than expected and was not significant for our sample. Furthermore, the effects of transparency and adjustability appear to be more independent than expected.


Reputational Algorithm Aversion

Weitzner, Gregory

arXiv.org Artificial Intelligence

People are often reluctant to incorporate information produced by algorithms into their decisions, a phenomenon called "algorithm aversion". This paper shows how algorithm aversion arises when the choice to follow an algorithm conveys information about a human's ability. I develop a model in which workers make forecasts of an uncertain outcome based on their own private information and an algorithm's signal. Low-skill workers receive worse information than the algorithm and hence should always follow the algorithm's signal, while high-skill workers receive better information than the algorithm and should sometimes override it. However, due to reputational concerns, low-skill workers inefficiently override the algorithm to increase the likelihood they are perceived as high-skill. The model provides a fully rational microfoundation for algorithm aversion that aligns with the broad concern that AI systems will displace many types of workers.
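
To make the signal structure concrete, here is a minimal illustrative formalization; the Gaussian signals, the squared-error objective, and all symbols below are assumptions for exposition rather than the paper's exact specification. Let the outcome be $\theta$, let the algorithm report $a = \theta + \varepsilon_a$ with $\varepsilon_a \sim N(0, \sigma_a^2)$, and let a worker of skill $k \in \{L, H\}$ observe a private signal $s_k = \theta + \varepsilon_k$ with $\varepsilon_k \sim N(0, \sigma_k^2)$, where $\sigma_H < \sigma_a < \sigma_L$. If the worker simply reports one of the two signals, the expected squared forecast error is $\sigma_k^2$ when following the private signal and $\sigma_a^2$ when following the algorithm, so accuracy alone dictates that the low-skill worker follow $a$ (since $\sigma_L^2 > \sigma_a^2$) and that the high-skill worker follow $s_H$ (since $\sigma_H^2 < \sigma_a^2$). When an evaluator observes only whether the forecast deviates from $a$, overriding the algorithm becomes an indicator of high skill, which is the reputational incentive behind the inefficient overrides the abstract describes.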


Hierarchical Neural Additive Models for Interpretable Demand Forecasts

Feddersen, Leif, Cleophas, Catherine

arXiv.org Artificial Intelligence

Demand forecasts are the crucial basis for numerous business decisions, ranging from inventory management to strategic facility planning. While machine learning (ML) approaches offer accuracy gains, their interpretability and acceptance are notoriously lacking. Addressing this dilemma, we introduce Hierarchical Neural Additive Models for time series (HNAM). HNAM expands upon Neural Additive Models (NAM) with a time-series-specific additive model comprising a level component and interacting covariate components. Covariate interactions are only allowed according to a user-specified interaction hierarchy: for example, weekday effects may be estimated independently of other covariates, a holiday effect may depend on the weekday, and a promotion effect may depend on both of these lower-level covariates. HNAM thereby yields an intuitive forecasting interface in which analysts can observe the contribution of each known covariate. We evaluate the proposed approach extensively on real-world retail data, benchmarking its performance against state-of-the-art machine learning and statistical models. The results reveal that HNAM offers competitive prediction performance whilst providing plausible explanations.
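
As a rough illustration of the structure described in the abstract, the sketch below (in PyTorch; it is not the authors' implementation, and the class name, layer sizes, and example hierarchy are assumptions made for exposition) adds a learned level to one small sub-network per covariate, lets each sub-network additionally condition on the covariates below it in a user-specified interaction hierarchy, and returns the per-covariate contributions that an analyst would inspect.

import torch
import torch.nn as nn

class HierarchicalAdditiveForecaster(nn.Module):
    # Illustrative level-plus-covariates additive forecaster with
    # hierarchy-constrained covariate interactions (a sketch, not HNAM itself).
    def __init__(self, hierarchy):
        # hierarchy maps each covariate to the lower-level covariates its
        # effect may depend on, e.g.
        # {"weekday": [], "holiday": ["weekday"], "promotion": ["weekday", "holiday"]}
        super().__init__()
        self.hierarchy = hierarchy
        self.level = nn.Parameter(torch.zeros(1))  # base demand level
        self.effects = nn.ModuleDict({
            name: nn.Sequential(
                nn.Linear(1 + len(parents), 16),
                nn.ReLU(),
                nn.Linear(16, 1),
            )
            for name, parents in hierarchy.items()
        })

    def forward(self, covariates):
        # covariates: dict mapping covariate name -> tensor of shape (batch, 1)
        batch = next(iter(covariates.values())).shape[0]
        prediction = self.level.expand(batch, 1)
        contributions = {}
        for name, parents in self.hierarchy.items():
            inputs = torch.cat(
                [covariates[name]] + [covariates[p] for p in parents], dim=1
            )
            contributions[name] = self.effects[name](inputs)  # additive effect
            prediction = prediction + contributions[name]
        return prediction, contributions

# Hypothetical usage: the returned "parts" dictionary holds the per-covariate
# contributions that make the forecast decomposable for an analyst.
model = HierarchicalAdditiveForecaster(
    {"weekday": [], "holiday": ["weekday"], "promotion": ["weekday", "holiday"]}
)
x = {name: torch.rand(8, 1) for name in ["weekday", "holiday", "promotion"]}
forecast, parts = model(x)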


Basho in the machine: Humans find attributes of beauty and discomfort in algorithmic haiku -- ScienceDaily

#artificialintelligence

The gap between human creativity and artificial intelligence seems to be narrowing. Previous studies have compared AI-generated and human-written poems and asked whether people can distinguish between them. Now, a study led by Yoshiyuki Ueda at the Kyoto University Institute for the Future of Human and Society has shown AI's potential to create literary art such as haiku -- the shortest poetic form in the world -- that rivals human work without human help. Ueda's team compared AI-generated haiku produced without human intervention, also known as human out of the loop, or HOTL, with a contrasting method known as human in the loop, or HITL. The project involved 385 participants, each of whom evaluated 40 haiku poems -- 20 each of HITL and HOTL -- plus 40 composed entirely by professional haiku writers.


Humans and AI: Bargaining Power

#artificialintelligence

I have a confession to make--I'm a back-seat driver! When sitting in a taxi, I can't help but grumble when the ride isn't smooth, or the driver chooses the slowest lane of traffic. I have to fight the urge to take control. When it comes to shopping, I passively accept what is offered for sale. But my wife, who grew up in Asia where haggling is part of the culture, is different.


Humans and AI: The Bargaining Power of the Denominations

#artificialintelligence

AI success requires individuals, interaction, and innovation. If you want a human-driven plan for AI success, design processes in which people are empowered, not controlled, and in which individuals can influence outcomes and make choices even from a limited set of options. By respecting human dignity and enabling people to make their own decisions, you will have a smoother path to organizational change, more accurate decisions, and more successful business outcomes. Choose modern AI systems that can intuitively explain their decisions.


Explaining Our Extreme Safety Demands for Self-Driving Cars

#artificialintelligence

Self-driving cars promise to brighten our lives in multiple ways. Americans spend almost an hour daily, on average, commuting to and from work. While self-driving cars will not cut down on our time commuting, they will allow us to use that time in more productive or fun ways. Self-driving cars can also ease the lives of people unable to drive on their own due to old age or disability. Finally, research has shown that introducing self-driving cars, once they are at least 10 percent safer than the average driver, would spare hundreds of thousands of the 1.25 million lives lost in traffic accidents each year. Despite the advantage of allowing self-driving cars on the road as soon as they are somewhat safer than the average human driver, there is significant public resistance to introducing them unless they are far safer (e.g., 90 percent safer than the average human driver).


How to Get Business Leaders to Trust Algorithms

#artificialintelligence

The past decade has seen lightning-fast evolution in the possibilities of predictive analytics. Given the right data, machine learning (ML) algorithms can make forecasts much more accurately than a human expert. Even experienced high-level executives should be using those insights to inform their business decisions. This level of insight would not only make those decisions more accurate -- it would save executives an enormous amount of time. According to Accenture research, some organizations that deploy ML solutions see a tenfold (or more) increase in time savings.